LocalRateLimit(HTTP): Add dynamic token bucket support#36623
wbpcode merged 46 commits into envoyproxy:main
Conversation
Signed-off-by: Vikas Choudhary (vikasc) <choudharyvikas16@gmail.com>
CC @envoyproxy/api-shepherds: Your approval is needed for changes made to the API.
I think we still need a way to limit the overhead and memory of the token buckets. It's unacceptable to let them grow without bound.
Thanks a lot for taking a look.
Signed-off-by: Vikas Choudhary (vikasc) <choudharyvikas16@gmail.com>
Signed-off-by: Vikas Choudhary (vikasc) <choudharyvikas16@gmail.com>
Thanks for this contribution. Dynamic descriptor support is a very complex problem in the local rate limit, considering various limitations. I have taken a pass over the current implementation, but before flushing more comments on the implementation details, I will throw out some questions first:
So, I suggest you take a step back and consider whether there is another way to resolve your requirement, like a new limit filter in your fork or using rate_limit/rate_limit_quota directly. If you insist on enhancing the local_rate_limit, then I think we should first provide an abstraction (like DynamicDescriptor) to wrap all this complexity (list/memory management, lifetime problems, cross-worker updating, etc. (PS: I think a lock may be an option if we can limit the lock to only be used in the new feature)), and then we could integrate the high-level abstraction into the local_rate_limit. Otherwise it's hard for a reviewer to ensure the quality of the existing feature. Thanks again for all your help and contribution. This is not a simple feature. 🌷
Thanks a lot @wbpcode for taking a look. Really appreciate!!
From the conversations on the linked issues, it seems there have been repeated requests for this functionality for a long time, so figuring out a solution in the community might benefit a number of users.
Sounds good. I will work on adding the abstraction as per your suggestion.
Signed-off-by: Vikas Choudhary (vikasc) <choudharyvikas16@gmail.com>
Thanks so much for this update. I think this makes sense. Here are some high-level suggestions (I think we are on the right track, thanks):
```protobuf
// Actual number of dynamic descriptors will depend on the cardinality of unique values received from the http request for the omitted
// values.
// Default is 20.
uint32 dynamic_descripters_lru_cache_limit = 18;
```
Maybe name this simply max_dynamic_descripters. (We may change the elimination algorithm in the future, who knows?) And please use the wrapper number type google.protobuf.UInt32Value.
And please add an explicit bool to enable this feature, like google.protobuf.BoolValue use_dynamic_descripters.
This pull request has been automatically marked as stale because it has not had activity in the last 30 days. It will be closed in 7 days if no further activity occurs. Please feel free to give a status update now, ping for review, or re-open when it's ready. Thank you for your contributions!
@wbpcode PTAL.
Thanks for this great contribution. I am so happy to see this happen. After a quick check, I think we are on the right track. But I prefer to review and land this after #38197, because it will also simplify this PR.
Sure. Per your previous suggestion, I am not assuming the timer token bucket in this PR anyway, so landing #38197 first will not break anything here; rather, it will help simplify this PR by letting us remove a few checks/validations. So I totally agree on landing #38197 first. Thanks a lot!!
/wait on the other PR. Please merge main when it's ready for another pass, and it'll show back up in the oncall dashboard.
Signed-off-by: Vikas Choudhary (vikasc) <choudharyvikas16@gmail.com>
Signed-off-by: Vikas Choudhary (vikasc) <choudharyvikas16@gmail.com>
wbpcode
left a comment
Thanks for the update. Some comments are added. And please check all the comments to ensure they match our style. Thanks again!
```cpp
// A flag to enable the usage of dynamic buckets for local rate limiting. For example, dynamically
// created token buckets for each unique value of a request header.
FALSE_RUNTIME_GUARD(envoy_reloadable_features_local_rate_limiting_with_dynamic_buckets);
```
I think the runtime flag is unnecessary, because dynamic descriptors are only enabled when an empty descriptor value is set, which was not allowed in the past.
I think we can treat this as a new feature that is controlled by explicit API configuration.
```diff
 using RateLimitTokenBucketSharedPtr = std::shared_ptr<RateLimitTokenBucket>;

-class LocalRateLimiterImpl {
+class LocalRateLimiterImpl : public Logger::Loggable<Logger::Id::rate_limit_quota> {
```
I think you are using incorrect logger id.
Added a new id local_rate_limit.
I think you forgot to change this line of code?
```protobuf
// Actual number of dynamic descriptors will depend on the cardinality of unique values received from the http request for the omitted
// values.
// Minimum is 1. Default is 20.
google.protobuf.UInt32Value max_dynamic_descriptors = 18;
```
```cpp
if (per_connection) {
  throw EnvoyException(
      "local rate descriptor value cannot be empty in per connection rate limit mode");
}
```
Why is this restriction necessary?
Not necessary. I just wanted to reduce the scope and cover it in a follow-up. I think it will only require passing max_dynamic_descriptors from the filter config to PerConnectionRateLimiter.
Do you think it is a must to also cover PerConnectionRateLimiter in this PR?
Pushed changes to cover this as well.
```cpp
if (lru_size == 0) {
  throw EnvoyException("minimum allowed value for max_dynamic_descriptors is 1");
}
```
This should be validated when loading the local rate limit configuration rather than doing it here.
Added PGV to cover this.
```cpp
if (wildcard_found) {
  DynamicDescriptorSharedPtr dynamic_descriptor = std::make_shared<DynamicDescriptor>(
      per_descriptor_token_bucket, lru_size, dispatcher.timeSource());
```
For the dynamic descriptor, the per_descriptor_token_bucket is unnecessary. I think you should create a DynamicDescriptor based on the per_descriptor_max_tokens, per_descriptor_tokens_per_fill, and per_descriptor_fill_interval, rather than keeping a meaningless per_descriptor_token_bucket.
The per_descriptor_token_bucket should only be created for the non-dynamic descriptors here.
```cpp
bool DynamicDescriptorMap::matchDescriptorEntries(
    const std::vector<RateLimit::DescriptorEntry>& request_entries,
    const std::vector<RateLimit::DescriptorEntry>& user_entries) {
```
The name user_entries is a little unclear. It would be better to call it config_entries or something similar.
```cpp
bool has_empty_value = false;
for (size_t i = 0; i < request_entries.size(); ++i) {
  // Check if the keys are equal
  if (request_entries[i].key_ != user_entries[i].key_) {
    return false;
  }

  // all non-blank user values must match the request values
  if (!user_entries[i].value_.empty() && user_entries[i].value_ != request_entries[i].value_) {
    return false;
  }

  // Check for empty value in user entries
  if (user_entries[i].value_.empty()) {
    has_empty_value = true;
  }
}
return has_empty_value;
```
Suggested change:

```diff
-bool has_empty_value = false;
 for (size_t i = 0; i < request_entries.size(); ++i) {
-  // Check if the keys are equal
+  // Check if the keys are equal.
   if (request_entries[i].key_ != user_entries[i].key_) {
     return false;
   }
-  // all non-blank user values must match the request values
-  if (!user_entries[i].value_.empty() && user_entries[i].value_ != request_entries[i].value_) {
-    return false;
-  }
-  // Check for empty value in user entries
+  // Check values are equal or wildcard value is used.
   if (user_entries[i].value_.empty()) {
-    has_empty_value = true;
+    continue;
+  }
+  if (request_entries[i].value_ != user_entries[i].value_) {
+    return false;
   }
 }
-return has_empty_value;
+return true;
```
Signed-off-by: Vikas Choudhary (vikasc) <choudharyvikas16@gmail.com>
Signed-off-by: Vikas Choudhary (vikasc) <choudharyvikas16@gmail.com>
Signed-off-by: Vikas Choudhary (vikasc) <choudharyvikas16@gmail.com>
wbpcode
left a comment
LGTM with very very minor comments. Thanks for update 🙏
```diff
 using RateLimitTokenBucketSharedPtr = std::shared_ptr<RateLimitTokenBucket>;

-class LocalRateLimiterImpl {
+class LocalRateLimiterImpl : public Logger::Loggable<Logger::Id::rate_limit_quota> {
```
I think you forgot to change this line of code?
```cpp
RateLimitTokenBucketSharedPtr
DynamicDescriptorMap::getBucket(const RateLimit::Descriptor request_descriptor) {
  for (const auto& pair : user_descriptors_) {
    auto user_descriptor = pair.first;
```

```cpp
      continue;
    }

    // we found a user configured wildcard descriptor that matches the request descriptor.
```
Please check that all these comments match our style.
I have updated this specific comment. Is there a doc/readme that has guidelines about comment style, for my reference?
Signed-off-by: Vikas Choudhary (vikasc) <choudharyvikas16@gmail.com>
Commit Message: LocalRateLimit(HTTP): Add dynamic token bucket support
Additional Description:
fixes: #23351 and #19895
The user configures descriptors in the HTTP local rate limit filter. These descriptors are the "target" to match against the source descriptors built from the traffic (HTTP requests). Only matched traffic will be rate limited. When a request arrives at runtime, descriptors are generated based on the rate_limit configuration, with values picked from the request as directed by that configuration. These generated descriptors are matched against the "target" (user-configured) descriptors. Generated descriptors are already very flexible, in the sense that "values" from the request can be extracted in a number of ways (dynamic metadata, the matcher API, computed regular expressions, etc.), but the "target" (user-configured) descriptors are very rigid: the user is expected to statically configure the "values" in the descriptor.

This PR adds flexibility by allowing blank "values" in the user-configured descriptors. Blank values will be treated as wildcards. Suppose a descriptor entry key is `client-id` and the value is left blank by the user; the local rate limit filter will then create a descriptor dynamically for each unique value of the `client-id` header. That means `client1`, `client2`, and so on will each get dedicated descriptors and token buckets.

To keep resource consumption under limit, an LRU cache is maintained for dynamic descriptors, with a default size of 20, which is configurable.
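As a concrete illustration of the wildcard behavior described above, a filter configuration might look like the following. This is a hedged sketch: the `descriptors`/`entries`/`token_bucket` shape follows the existing local rate limit API, while `max_dynamic_descriptors` is the field discussed in this PR; exact field names and placement may differ from the final docs.

```yaml
typed_config:
  "@type": type.googleapis.com/envoy.extensions.filters.http.local_ratelimit.v3.LocalRateLimit
  stat_prefix: http_local_rate_limiter
  # Global bucket for traffic that matches no descriptor.
  token_bucket:
    max_tokens: 100
    tokens_per_fill: 100
    fill_interval: 60s
  # Cap on dynamically created per-value buckets (default 20 per this PR).
  max_dynamic_descriptors: 50
  descriptors:
  - entries:
    - key: client-id
      value: ""  # Blank value = wildcard: one bucket per unique client-id.
    token_bucket:
      max_tokens: 10
      tokens_per_fill: 10
      fill_interval: 60s
```

With this config, each distinct `client-id` seen in requests would get its own 10-token bucket, with at most 50 such buckets retained under the LRU policy.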
Docs Changes: TODO
Release Notes: TODO